Allow updating configuration of changes tries #3201
Conversation
Duplicating IMPORTANT NOTE here: please only merge this PR if you are sure that we do not need db upgrade code here. I haven't done that - not sure if it is required. If merged as-is, this PR will break all existing databases => nodes will be unable to start with old databases. Please respond if you think we should start the database versioning stuff.
will this affect typical deployments (like kusama nodes)?
If you're talking about breaking the db - yes, this will affect all nodes. So you can't just copy the new binary over the previous one - we need to erase the db && then (if there's another source of blocks on the network) it'll sync from scratch.
@svyatonik will nodes with this patch be able to sync from nodes without it?
@demimarie-parity Yes, it changes nothing in how blocks on existing chains are processed.
how much trouble would a DB migration be? we have a semi-production network of ~500 nodes out there and I don't really want to have to force them all to resync on the next minor revision..
Re upgrade performance - I'm not sure yet. Re code - it shouldn't be a big issue, I just thought that maybe we do not need it yet. We have changed the db format several times, but there has never been any upgrade code. But that was probably before Kusama. So I'll add the db upgrade here. Will try to keep it localized in a single commit to make it easily reviewable.
(or perhaps will open an additional PR to the …
this one shouldn't be merged without migration code, so probably best if it's a single commit on top of this.
This commit implements the database upgrade. The only upgrade for now is from v0 to v1, which: (1) adds a new column to the database and (2) initializes the changes tries configuration cache (with …). I also forgot that I've promised to sync Kusama with these changes in some of the comments above => will temporarily switch to …
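For reviewers who want the upgrade flow spelled out, here is a minimal sketch of the versioned-upgrade idea described above, assuming a plain version-marker file next to the database; the file name, function names and paths are hypothetical, not the actual code in this PR:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Database version expected by this build of the client.
const CURRENT_VERSION: u32 = 1;

/// Read the on-disk version marker; a missing file is treated as version 0.
fn read_version(db_path: &Path) -> io::Result<u32> {
    match fs::read_to_string(db_path.join("db_version")) {
        Ok(contents) => Ok(contents.trim().parse().unwrap_or(0)),
        Err(ref e) if e.kind() == io::ErrorKind::NotFound => Ok(0),
        Err(e) => Err(e),
    }
}

/// Placeholder for the v0 -> v1 step: add the new column and seed the
/// changes tries configuration cache.
fn migrate_v0_to_v1(_db_path: &Path) -> io::Result<()> {
    Ok(())
}

/// Run all pending migrations in order, then persist the new version marker.
fn upgrade_db(db_path: &Path) -> io::Result<()> {
    fs::create_dir_all(db_path)?;
    let mut version = read_version(db_path)?;
    while version < CURRENT_VERSION {
        match version {
            0 => migrate_v0_to_v1(db_path)?,
            _ => break,
        }
        version += 1;
    }
    fs::write(db_path.join("db_version"), version.to_string())
}

fn main() -> io::Result<()> {
    upgrade_db(Path::new("./chain-db"))
}
```

Running the upgrade before the database is opened keeps the change localized: a node started on an old database migrates once and then continues with v1.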
Had a quick peek at the db commit. Things look good (considering the meta column is the same between light and full).
Thanks for taking the time to review! I agree that a function like … As for the second - I'm not sure we are exposing anything internal from the database to clients. The only exception is aux storage, but it already has its own upgrade path - see e.g. …
So I've finally synced both Kusama clients (full/light) with this PR - everything seems fine to me.
IMPORTANT NOTES FOR REVIEWERS
=================================================================
Follow-up of #2840 (comment)
This PR introduces a new changes-tries related digest item (`ChangesTrieSignal`) + a separate SRML method that is used to change the CT configuration (and emits this signal). It is also possible to use the same signal for emergency creation of a max-level digest CT, though that will be implemented in the next PRs.

The idea is to use the consensus cache (which from now on shouldn't be called the consensus cache anymore) to hold the intervals (and also the config itself) where the CT configuration hasn't been changed. When building a CT, you only need the actual CT configuration => the work is done within a single interval. When fetching key changes/proofs, this may affect several intervals => every interval will have its own `state_machine/src/changes_trie/*` related call.
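To make the signal/interval idea above more concrete, here is a simplified sketch; the configuration fields mirror the primitives type, but the range/cache-entry shape and the helper name are assumptions for illustration, not the PR's actual API:

```rust
/// Changes trie configuration (simplified): how often digest tries are
/// built and how many digest levels exist.
#[derive(Clone, Debug, PartialEq)]
pub struct ChangesTrieConfiguration {
    pub digest_interval: u32,
    pub digest_levels: u32,
}

/// Digest item emitted by the runtime when the configuration changes;
/// `None` switches changes tries off starting from the next block.
#[derive(Clone, Debug)]
pub enum ChangesTrieSignal {
    NewConfiguration(Option<ChangesTrieConfiguration>),
}

/// One cache entry: a configuration together with the interval of blocks
/// where it stayed unchanged (`end == None` for the active entry).
#[derive(Clone, Debug)]
pub struct ConfigurationRange<N> {
    pub zero: N,                                  // parent of the first block using this config
    pub end: Option<N>,                           // last block using this config, if it has changed
    pub config: Option<ChangesTrieConfiguration>, // `None` == changes tries disabled
}

/// Building a changes trie at `block` only needs the single range that
/// covers it; fetching key changes over a long range may touch several.
pub fn active_range<N: PartialOrd + Copy>(
    ranges: &[ConfigurationRange<N>],
    block: N,
) -> Option<&ConfigurationRange<N>> {
    ranges.iter().find(|r| r.zero < block && r.end.map_or(true, |end| block <= end))
}
```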
Detailed description:

- the changes tries pruning modes are now `NeverPrune` and `ByDepth(N)`;
- `well_known_cache_keys` is moved from `consensus_common` to the `client::backend`;
- the changes tries storage (`DbChangesTrieStorage`) is extracted from db/src/lib.rs into the dedicated file db/src/changes_tries_storage.rs. It now holds a reference to the cache where configuration changes are stored;
- the changes tries code now accepts a `struct ChangesTrieState` instead of simply a `ChangesTrieStorage`. In addition to the storage reference, it holds the active changes trie configuration and the zero block of this configuration (i.e. the parent block of the block where the first changes trie is built using this configuration). This is required to build the changes trie at the current block;
- `Client::key_changes` and `Client::key_changes_proof` are updated to support ranges where the CT configuration has been changed (see the sketch after this list). `Client::max_key_changes_range` isn't updated yet, because it is tightly related to pruning (see below);
- `SurfaceIterator` is extracted to the dedicated state_machine/src/changes_trie/surface_iterator.rs;
- when the digest block is built in `build.rs`, the max-level digest is created. It may be a skewed digest (a digest that is created ahead of schedule) or a normal digest.
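To illustrate the "ranges where the CT configuration has been changed" point (the sketch referenced in the list above), here is how a key-changes query over `[first, last]` could be split into per-configuration sub-queries. All names here are illustrative; the real splitting lives in the client/state-machine code of this PR:

```rust
/// A span of blocks governed by a single changes trie configuration
/// (same idea as `ConfigurationRange` in the earlier sketch, specialized
/// to `u64` block numbers; `end == None` means "still active").
#[derive(Clone, Debug)]
struct ConfigRange {
    zero: u64,
    end: Option<u64>,
    enabled: bool, // stands in for `Option<ChangesTrieConfiguration>`
}

/// Split a key-changes query over `[first, last]` into sub-queries, one per
/// configuration range, skipping ranges where changes tries are disabled.
fn split_query(ranges: &[ConfigRange], first: u64, last: u64) -> Vec<(u64, u64)> {
    let mut parts = Vec::new();
    for range in ranges {
        if !range.enabled {
            continue;
        }
        // Blocks covered by this configuration: (zero, end] (or open-ended).
        let range_first = range.zero + 1;
        let range_last = range.end.unwrap_or(u64::MAX);
        let sub_first = first.max(range_first);
        let sub_last = last.min(range_last);
        if sub_first <= sub_last {
            // Each pair would become its own changes-trie key_changes call,
            // executed with the configuration that was active in that range.
            parts.push((sub_first, sub_last));
        }
    }
    parts
}

fn main() {
    // Config active from genesis, disabled at block 100, re-enabled at 250.
    let ranges = vec![
        ConfigRange { zero: 0, end: Some(100), enabled: true },
        ConfigRange { zero: 100, end: Some(250), enabled: false },
        ConfigRange { zero: 250, end: None, enabled: true },
    ];
    // A query over [50, 300] touches two enabled ranges.
    assert_eq!(split_query(&ranges, 50, 300), vec![(50, 100), (251, 300)]);
}
```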
Changes tries are pruned similarly to how it has been done before - i.e. there's a block for which we guarantee that all changes tries (if created) after this block are valid (i.e. if we prune some trie, then the digest trie that points to it is also pruned) and all changes tries before this block should not be accessed. Since we now have tries that are built using different configurations, we store this block in a special metadata entry (because it isn't that easy to calculate).
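A tiny sketch of how that "guarantee block" could be consumed; the metadata entry and function names are made up for illustration:

```rust
/// Hypothetical metadata entry: the earliest block whose changes trie
/// (if it was ever built) is guaranteed to still be in the database.
#[derive(Clone, Copy)]
struct ChangesTriesMeta {
    oldest_unpruned: u64,
}

/// Tries strictly before the stored block must not be accessed.
fn is_accessible(meta: ChangesTriesMeta, block: u64) -> bool {
    block >= meta.oldest_unpruned
}

/// Pruning only ever advances the stored block, so the "everything after
/// this block is valid" guarantee is preserved across configurations.
fn record_pruned_up_to(meta: &mut ChangesTriesMeta, new_oldest: u64) {
    meta.oldest_unpruned = meta.oldest_unpruned.max(new_oldest);
}

fn main() {
    let mut meta = ChangesTriesMeta { oldest_unpruned: 0 };
    record_pruned_up_to(&mut meta, 1_024);
    assert!(!is_accessible(meta, 512));
    assert!(is_accessible(meta, 4_096));
}
```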